
    Lightweight adaptive filtering for efficient learning and updating of probabilistic models

    Adaptive software systems are designed to cope with unpredictable and evolving usage behaviors and environmental conditions. For these systems, reasoning mechanisms are needed to drive evolution; these mechanisms are usually based on models capturing relevant aspects of the running software. The continuous update of these models in evolving environments requires efficient learning procedures that have low overhead and are robust to changes. Most of the available approaches achieve one of these goals at the price of the other. In this paper we propose a lightweight adaptive filter to accurately learn time-varying transition probabilities of discrete-time Markov models, which provides robustness to noise and fast adaptation to changes with very low overhead. A formal assessment of the stability, unbiasedness and consistency of the learning approach is provided, as well as an experimental comparison with state-of-the-art alternatives.
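    The filtering idea can be illustrated with a minimal sketch: a fixed-gain exponential filter that tracks the transition probabilities of a discrete-time Markov model from a stream of observed transitions. The gain ALPHA and the two-state model below are illustrative assumptions; the adaptive filter described in the abstract tunes its gain online, which is not reproduced here.

```python
# Minimal sketch: exponentially-weighted estimation of time-varying
# transition probabilities of a discrete-time Markov model.
# ALPHA is a fixed, hypothetical gain: higher means faster adaptation
# to change but a noisier estimate.

ALPHA = 0.05

def update(estimates, src, dst):
    """Update the estimated transition distribution out of state `src`
    after observing one transition src -> dst."""
    row = estimates[src]
    for s in row:
        target = 1.0 if s == dst else 0.0
        row[s] = (1.0 - ALPHA) * row[s] + ALPHA * target

# Example: two-state model with initially uniform estimates.
est = {"a": {"a": 0.5, "b": 0.5}, "b": {"a": 0.5, "b": 0.5}}
for _ in range(200):          # stream of observed a -> b transitions
    update(est, "a", "b")
print(round(est["a"]["b"], 3))  # estimate converges toward 1.0
```

    Note that the update preserves the property that each row sums to one, so the estimates remain valid probability distributions at every step.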

    20th Workshop on Automotive Software Engineering (ASE’23)

    Software-based systems play an increasingly important role and enable most innovations in modern cars. This workshop will address various topics related to automotive software development. The participants will discuss appropriate methods, techniques, and tools needed to address the most pressing current challenges facing researchers and practitioners.

    A critical evaluation of spectrum-based fault localization techniques on a large-scale software system

    In the past, spectrum-based fault localization (SBFL) techniques have been developed to pinpoint a fault location in a program given a set of failing and successful test executions. Most of the algorithms use similarity coefficients and have only been evaluated on established but small benchmark programs from the Software-artifact Infrastructure Repository (SIR). In this paper, we evaluate the feasibility of applying 33 state-of-the-art SBFL techniques to a large real-world project, namely ASPECTJ. From an initial set of 350 faulty versions from the iBugs repository of ASPECTJ, we manually classified 88 bugs for which SBFL techniques are suitable. Notably, only 11 of these bugs can be found after examining the 1000 most suspicious lines, and on average 250 source code files need to be inspected per bug. Based on these results, the study showcases the limitations of current SBFL techniques on a larger program.
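    As an illustration of the similarity coefficients such techniques rely on, here is a minimal sketch of the Ochiai score, one widely used SBFL coefficient (the paper evaluates 33 techniques; this is just one representative). The coverage spectrum below is hypothetical.

```python
# Minimal sketch of a spectrum-based suspiciousness score (Ochiai).
# Inputs per program line: e_f = failing tests that execute the line,
# e_p = passing tests that execute it, plus the total number of
# failing tests. Lines are then ranked by descending score.
import math

def ochiai(e_f, e_p, total_failed):
    denom = math.sqrt(total_failed * (e_f + e_p))
    return e_f / denom if denom else 0.0

# Hypothetical coverage spectrum: line -> (e_f, e_p), with 2 failing tests.
spectrum = {"line 10": (2, 0), "line 11": (2, 5), "line 12": (0, 7)}
ranking = sorted(spectrum, key=lambda l: -ochiai(*spectrum[l], 2))
print(ranking)  # a line executed only by failing tests ranks first
```

    The study's point is that on a large program this ranking rarely places the fault near the top, so the inspection effort implied by the scores remains high.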

    Synthesising interprocedural bit-precise termination proofs

    Proving program termination is key to guaranteeing the absence of undesirable behaviour, such as hanging programs and even security vulnerabilities such as denial-of-service attacks. To make termination checks scale to large systems, interprocedural termination analysis seems essential; this is a largely unexplored area of research in termination analysis, where most effort has focussed on difficult single-procedure problems. We present a modular termination analysis for C programs using template-based interprocedural summarisation. Our analysis combines a context-sensitive, over-approximating forward analysis with the inference of under-approximating preconditions for termination. Bit-precise termination arguments are synthesised over lexicographic linear ranking function templates. Our experimental results show that our tool 2LS outperforms state-of-the-art alternatives, and demonstrate the clear advantage of interprocedural reasoning over monolithic analysis in terms of efficiency, while retaining comparable precision.
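    The core idea behind linear ranking functions can be sketched in a few lines: a function that is bounded below and strictly decreases on every loop transition proves the loop terminates. The loop and candidate function below are illustrative; the tool in the abstract synthesises such (lexicographic, bit-precise) functions automatically over templates rather than checking hand-picked candidates on concrete runs.

```python
# Minimal sketch of a (single-component) linear ranking function:
# f(x) must be non-negative and strictly decrease on every loop
# transition, which proves the loop terminates.

def loop_transitions(x0):
    """Yield (state, next_state) pairs for: while x > 0: x = x - 2."""
    x = x0
    while x > 0:
        yield x, x - 2
        x = x - 2

def is_ranking(f, transitions):
    # bounded below by 0 and strictly decreasing on each transition
    return all(f(x) >= 0 and f(x2) < f(x) for x, x2 in transitions)

f = lambda x: x  # candidate ranking function f(x) = x
print(is_ranking(f, loop_transitions(9)))  # True: the loop terminates
```

    A real synthesiser reasons symbolically over all reachable states (and over machine integers, hence "bit-precise"), not over one concrete execution as this sketch does.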

    Analysis of spatial distributions of road accidents

    Traffic accidents result in loss of life and financial loss to society. In developing countries, traffic fatalities are comparable to other leading causes of death. Analysing the spatial distribution of traffic accidents is very important, both as an aid in selecting the most appropriate type of accident reduction programme (e.g. site, route and area plans) and for assessing the effectiveness of such plans after implementation. The current practice for assessing the spatial distribution of accidents (e.g. visual examination) is reviewed. In this thesis, various methods for the statistical analysis of spatial distributions of accidents (including quadrat and nearest-neighbour methods) are reviewed and further improvements are described. Accidents are random events subject to both temporal and spatial variation. The basic variables for accident analysis are: distance and direction of accident locations in terms of North and East co-ordinates, azimuth, and the year of the accident. A new method for analysing the spatial pattern is proposed, whereby detection of a particular pattern indicates which type of accident reduction programme is most appropriate. The method distinguishes the spatial distribution of accidents (point cluster, line cluster, area cluster or a completely spatially random distribution) in different types of road networks (regular or irregular, dense or sparse). The method can also help in assessing changes in the spatial distributions of accidents.
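    One of the nearest-neighbour methods mentioned above can be sketched with the Clark-Evans index, which compares the mean observed nearest-neighbour distance against its expectation under complete spatial randomness (CSR): values well below 1 suggest clustering, values above 1 suggest regularity. The accident coordinates below are hypothetical, and edge effects are ignored in this sketch.

```python
# Minimal sketch of the Clark-Evans nearest-neighbour index R.
# R ~ 1 suggests complete spatial randomness, R << 1 clustering,
# R > 1 a regular (dispersed) pattern. Edge corrections are omitted.
import math

def clark_evans(points, area):
    n = len(points)
    nn = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if j != i)
        nn.append(d)
    observed = sum(nn) / n
    expected = 0.5 / math.sqrt(n / area)  # CSR expectation for density n/area
    return observed / expected

# Hypothetical accident locations: a tight cluster in a 100 x 100 region.
cluster = [(10.0, 10.0), (10.5, 10.2), (10.2, 10.8), (10.9, 10.5)]
print(clark_evans(cluster, 100 * 100) < 1)  # clustered pattern
```

    A point cluster like this would point towards a site-based reduction programme, whereas a pattern indistinguishable from CSR would favour an area-wide plan.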

    Teaching computer language handling - From compiler theory to meta-modelling

    Most universities teach computer language handling by mainly focussing on compiler theory, although MDA (model-driven architecture) and meta-modelling are increasingly important in the software industry as well as in computer science. In this article, we investigate how traditional compiler theory compares to meta-modelling with regard to formally defining the different aspects of a language, and how we can expand the focus in computer language handling courses to also include meta-model-based approaches. We give an outline of a computer language handling course that covers both paradigms, and share some experiences from running a course based on this outline at the University of Agder.

    Probabilistic Model-Based Safety Analysis

    Model-based safety analysis approaches aim at finding critical failure combinations by analysing models of the whole system (i.e. software, hardware, failure modes and environment). The advantage of these methods compared to traditional approaches is that the analysis of the whole system gives more precise results. Only a few model-based approaches have been applied to answer quantitative questions in safety analysis; they are often limited to specific failure propagation models or particular types of failure modes, or ignore system dynamics and behavior, as direct quantitative analysis uses large amounts of computing resources. New achievements in the domain of (probabilistic) model checking now allow this problem to be overcome. This paper shows how functional models based on synchronous parallel semantics, which can be used for system design, implementation and qualitative safety analysis, can be directly re-used for (model-based) quantitative safety analysis. Accurate modeling of different types of probabilistic failure occurrence is shown, as well as accurate interpretation of the results of the analysis. This allows for reliable and expressive assessment of the safety of a system in early design stages.
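    A minimal sketch of the kind of quantitative question involved: the probability of eventually reaching a hazardous state in a discrete-time Markov chain, computed here by value iteration. The four-state chain and all its probabilities are hypothetical; a real probabilistic model checker would derive such a chain from the full functional model.

```python
# Minimal sketch: unbounded reachability probability in a hypothetical
# discrete-time Markov chain with two absorbing states.
P = {
    "ok":       {"ok": 0.90, "degraded": 0.09, "hazard": 0.001, "shutdown": 0.009},
    "degraded": {"ok": 0.40, "degraded": 0.40, "hazard": 0.05,  "shutdown": 0.15},
    "hazard":   {"hazard": 1.0},    # absorbing hazardous state
    "shutdown": {"shutdown": 1.0},  # absorbing safe-shutdown state
}

def reach_prob(start, target, steps=10_000):
    """Value iteration for P(eventually reach `target`) from `start`."""
    v = {s: (1.0 if s == target else 0.0) for s in P}
    for _ in range(steps):
        v = {s: (1.0 if s == target else
                 sum(p * v[t] for t, p in P[s].items())) for s in P}
    return v[start]

print(reach_prob("ok", "hazard"))
```

    For this particular chain, solving the underlying linear equations by hand gives a hazard-reachability probability of 0.2125 from the "ok" state, which the iteration converges to.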

    Reliability Analysis of Component-Based Systems with Multiple Failure Modes

    This paper presents a novel approach to the reliability modeling and analysis of a component-based system that allows dealing with multiple failure modes and studying the error propagation among components. The proposed model allows specifying each component's propensity to produce, propagate, transform or mask different failure modes. These component-level reliability specifications, together with information about the system's global structure, allow precise estimation of reliability properties by means of analytical closed formulas, probabilistic model checking or simulation methods. To support the rapid identification of components that could heavily affect the system's reliability, we also show how our modeling approach easily supports the automated estimation of the system's sensitivity to variations in the reliability properties of its components. The results of this analysis allow system designers and developers to identify critical components where it is worth spending additional improvement efforts.
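    The flavour of such failure-mode-aware propagation analysis can be sketched as follows (an illustrative toy, not the paper's formalism): each component maps an incoming failure mode to a distribution over outgoing modes, so composing two stages of a pipeline yields the system-level failure-mode distribution in closed form. All names and probabilities below are hypothetical.

```python
# Illustrative sketch: failure-mode propagation through a two-stage
# pipeline. Modes: "ok", "value" (wrong value), "omission" (no output).
# Each table maps an incoming mode to a distribution over outgoing
# modes, so a component can produce, propagate, transform or mask modes.

stage1 = {"ok": {"ok": 0.98, "value": 0.015, "omission": 0.005}}
stage2 = {
    "ok":       {"ok": 0.99, "value": 0.01, "omission": 0.0},
    "value":    {"ok": 0.30, "value": 0.70, "omission": 0.0},  # partial masking
    "omission": {"ok": 0.00, "value": 0.00, "omission": 1.0},  # propagated
}

def system_distribution():
    """Compose the two stages: distribution of the pipeline's output mode."""
    dist = {"ok": 0.0, "value": 0.0, "omission": 0.0}
    for m1, p1 in stage1["ok"].items():
        for m2, p2 in stage2[m1].items():
            dist[m2] += p1 * p2
    return dist

d = system_distribution()
print(round(d["ok"], 4))  # probability the pipeline delivers a correct output
```

    Because the composition is a closed formula in the component parameters, sensitivity to any single parameter can be estimated simply by perturbing it and recomputing, which is the kind of automated analysis the abstract describes.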